Recommended from our members
High reliability Android application for multidevice multimodal mobile data acquisition and annotation
We have completed the collection of one of the richest accurately annotated mobile datasets of modes of transportation and locomotion. To do this, we developed a highly reliable Android application called DataLogger, capable of recording multisensor data from multiple synchronized smartphones simultaneously. The application allows real-time data annotation. We explain how we designed the app to achieve high reliability and ease of use. We also present an evaluation of the application in a big-data collection (750 hours, 950 GB of data, 17 different sensor modalities), analysing the data loss (less than 0.4‰) and battery consumption (≈6% on average per hour). The application is available as open source.
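The two reported quality metrics are simple ratios, sketched below; the abstract gives only the aggregate figures, so the session numbers and helper names here are hypothetical illustrations.

```python
# Illustrative computation of the two logging metrics reported above
# (the per-session numbers below are made up for the example).

def data_loss_permille(samples_lost: int, samples_expected: int) -> float:
    """Data loss expressed as a per-mille (‰) ratio."""
    return 1000.0 * samples_lost / samples_expected

def battery_drain_per_hour(start_pct: float, end_pct: float, hours: float) -> float:
    """Average battery percentage consumed per hour of recording."""
    return (start_pct - end_pct) / hours

# A session losing 2 samples out of 10,000 stays under the 0.4‰ figure.
assert data_loss_permille(2, 10_000) < 0.4
# A phone dropping from 100% to 76% over 4 hours averages 6%/h.
assert battery_drain_per_hour(100, 76, 4) == 6.0
```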
The University of Sussex-Huawei locomotion and transportation dataset for multimodal analytics with mobile devices
Scientific advances build on reproducible research, which needs publicly available benchmark datasets. The computer vision and speech recognition communities have led the way in establishing benchmark datasets. Far fewer datasets are available in mobile computing, especially for rich locomotion and transportation analytics.
This paper presents a highly versatile and precisely annotated large-scale dataset of smartphone sensor data for multimodal locomotion and transportation analytics of mobile users. The dataset comprises 7 months of measurements, collected from all sensors of 4 smartphones carried at typical body locations, including the images of a body-worn camera, while 3 participants used 8 different modes of transportation in the southeast of the United Kingdom, including in London. In total 28 context labels were annotated, including transportation mode, participant’s posture, inside/outside location, road conditions, traffic conditions, presence in tunnels, social interactions, and having meals. The total amount of collected data exceeds 950 GB of sensor data, which corresponds to 2812 hours of labelled data and 17562 km of travelled distance. We present how we set up the data collection, including the equipment used and the experimental protocol.
We discuss the dataset, including the data curation process and the analysis of the annotations and of the sensor data. We discuss the challenges encountered and present the lessons learned and some of the best practices we developed to ensure high quality data collection and annotation. We discuss the potential applications which can be developed using this large-scale dataset. In particular, we present how a machine-learning system can use this dataset to automatically recognize modes of transportation. Many other research questions related to transportation analytics, activity recognition, radio signal propagation and mobility modelling can be addressed through this dataset. The full dataset is being made available to the community, and a thorough preview is already published.
Deep convolutional feature transfer across mobile activity recognition domains, sensor modalities and locations
Deep convolutional neural networks are powerful image and signal classifiers. One hypothesis is that kernels in the convolutional layers act as feature extractors, progressively highlighting more domain-specific features in upper layers of the network. Thus lower-level features might be suitable for transfer. We analyse this in wearable activity recognition by reusing kernels learned on a source domain on another target domain. We consider transfer between users, application domains, sensor modalities and sensor locations. We characterise the trade-offs of transferring various convolutional layers in terms of model size, learning speed, recognition performance and training data. Through a novel kernel visualisation technique and comparative evaluations we identify what learned kernels are predominantly sensitive to, amongst sensor characteristics, motion dynamics and on-body placement. We demonstrate a ~17% decrease in training time at equal performance thanks to kernel transfer, and we derive recommendations on when transfer is most suitable.
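The transfer idea above can be sketched in a minimal, framework-free form: the lowest convolutional layers of a source model are copied (and frozen) into a target model, while the upper, more domain-specific layers are re-initialised for training on the target domain. Layer names, kernel shapes and the cut-off depth here are assumptions for illustration, not the paper's actual architecture.

```python
# Minimal sketch of convolutional kernel transfer between two 1D sensor
# models (numpy only; layer names and shapes are hypothetical).
import numpy as np

rng = np.random.default_rng(0)

# Source model trained on domain A: one bank of 1D conv kernels per layer,
# ordered from lowest (most generic) to highest (most domain-specific).
source_model = {
    "conv1": rng.standard_normal((8, 5)),    # 8 kernels of width 5
    "conv2": rng.standard_normal((16, 5)),
    "conv3": rng.standard_normal((32, 5)),   # most domain-specific layer
}

def transfer_lower_layers(source, n_layers):
    """Copy the lowest n_layers kernel banks into a new target model and
    mark them frozen; upper layers are re-initialised for target training."""
    target, frozen = {}, set()
    init = np.random.default_rng(1)
    for i, (name, kernels) in enumerate(source.items()):
        if i < n_layers:
            target[name] = kernels.copy()    # reused feature extractors
            frozen.add(name)
        else:
            target[name] = init.standard_normal(kernels.shape)
    return target, frozen

target_model, frozen = transfer_lower_layers(source_model, n_layers=2)
assert np.array_equal(target_model["conv1"], source_model["conv1"])
assert "conv3" not in frozen   # upper layer left trainable on the target
```

Transferring only the lower layers is what yields the training-time saving: the frozen kernels need no gradient updates, and only the re-initialised upper layers are fitted to the target domain.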
Exploring human activity annotation using a privacy preserving 3D model
Annotating activity recognition datasets is a very time-consuming process. Using lay annotators (e.g. via crowdsourcing) has been suggested to speed this up. However, this requires preserving the privacy of users and may preclude relying on video for annotation. We investigate to what extent a 3D human model, animated from the data of inertial sensors placed on the limbs, allows for annotation of human activities. The animated model is shown to 6 people in a suite of tests in order to understand the accuracy of the labelling. We present the model and the dataset, then the experiments, including the set of activities considered. We present 3 experiments where we investigate the use of a 3D model for i) activity segmentation, ii) "open-ended" annotation, where users freely describe the activity they see on screen, and iii) traditional annotation, where users pick one activity among a pre-defined list of activities. In the latter case, results show that users recognise activities with 56% accuracy when picking from 11 possible activities.
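To put the 56% figure in context, it helps to compare it against the chance level of picking one label out of 11 at random; the arithmetic below uses only the numbers stated in the abstract.

```python
# Chance-level baseline for picking one label out of 11 activities.
n_activities = 11
chance = 1 / n_activities      # ≈ 0.091, i.e. about 9.1%
reported = 0.56                # accuracy reported for list-based annotation
assert reported > 6 * chance   # well above random guessing
```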
Wearable beach volleyball serve type recognition
We present results on beach volleyball serve recognition and classification from a wrist-worn gyroscope deployed with semi-professional beach volleyball players. We trained a template-based recognition system based on a Warping Longest Common Subsequence algorithm to spot serves, and potentially distinguish among 4 common serve types. This shows the potential of wearable technologies in beach volleyball, which could offer precise sport analytics.
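The template-matching idea can be illustrated with a simplified, tolerance-based longest-common-subsequence score between a serve template and an incoming gyroscope stream. This is a sketch only: the warping penalties of the actual WLCSS algorithm are omitted, and the epsilon tolerance, signals and threshold below are made-up assumptions.

```python
# Simplified sketch of tolerance-based LCS matching between a serve
# template and a gyroscope stream (not the exact WLCSS variant used
# in the paper; eps, the signals and the threshold are assumptions).

def lcs_score(template, stream, eps=0.3):
    """Length of the longest common subsequence, where two samples
    'match' when their values differ by at most eps."""
    m, n = len(template), len(stream)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if abs(template[i - 1] - stream[j - 1]) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

template = [0.0, 1.0, 2.0, 1.0, 0.0]       # idealised serve motion
stream = [0.1, 0.2, 1.1, 2.2, 0.9, 0.1]    # noisy incoming gyroscope data
score = lcs_score(template, stream)
# A serve is "spotted" when the score exceeds a per-serve-type threshold.
assert score >= 4
```

Because LCS tolerates insertions in the stream, the same template can match serves executed at slightly different speeds, which is why warping-based matchers suit wearable gesture spotting.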